The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
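The survey abstract above reports that oversized samples were most commonly handled by patch-based training. A minimal sketch of that idea, using numpy and illustrative shapes (the volume size and patch size here are hypothetical, not taken from the survey):

```python
import numpy as np

def sample_patches(volume, patch_size, n_patches, rng=None):
    """Randomly crop fixed-size patches from a large volume.

    Patch-based training feeds these crops to the network instead of
    the full sample, keeping per-step memory usage bounded.
    """
    rng = np.random.default_rng(rng)
    patches = []
    for _ in range(n_patches):
        # Pick a random corner that keeps the patch fully in bounds.
        corner = [rng.integers(0, v - p + 1) for v, p in zip(volume.shape, patch_size)]
        slices = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[slices])
    return np.stack(patches)

# e.g. a 3D scan too large to process at once, cropped into a training batch
volume = np.zeros((128, 256, 256))
batch = sample_patches(volume, (64, 64, 64), n_patches=8, rng=0)
print(batch.shape)  # (8, 64, 64, 64)
```

Downsampling and 2D slicing, the other two strategies the survey mentions, trade resolution or 3D context for memory instead of cropping.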
As convolutional neural networks (CNNs) have become more accurate at object recognition, their representations have become increasingly similar to those of the primate visual system. This finding has inspired us and other researchers to ask whether the implication also runs the other way: if CNN representations are made more brain-like, do the networks become more accurate? Previous attempts to address this question showed very modest gains in accuracy, owing in part to limitations of the regularization methods used. To overcome these limitations, we developed a new neural data regularizer for CNNs that uses deep canonical correlation analysis (DCCA) to optimize the similarity of CNN image representations to those of the monkey visual cortex. Using this new neural data regularizer, we see much larger performance gains in classification accuracy and few-shot accuracy than with the previous state-of-the-art neural data regularizers. These networks are also more robust to adversarial attacks than their unregularized counterparts. Together, these results confirm that neural data regularization can improve CNN performance, and they introduce a new method for obtaining even larger performance gains.
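The general shape of a neural-data-regularized objective, as described in the abstract above, is a task loss plus a term rewarding similarity to recorded neural responses. A toy sketch: the similarity term below is a plain per-unit correlation standing in for the paper's DCCA component (DCCA additionally learns transforms of both spaces, which also removes the unrealistic assumption made here that the two feature spaces have matching dimensions):

```python
import numpy as np

def neural_similarity(cnn_feats, neural_resps):
    """Stand-in similarity term: mean correlation between matched
    (image x unit) response patterns. The paper's regularizer uses
    deep CCA instead of this fixed correlation."""
    a = cnn_feats - cnn_feats.mean(axis=0)
    b = neural_resps - neural_resps.mean(axis=0)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0)) + 1e-8
    return float(np.mean(num / den))

def regularized_loss(task_loss, cnn_feats, neural_resps, lam=0.1):
    # Subtract the similarity (scaled by lam) so that minimizing the
    # loss pushes representations toward the neural data.
    return task_loss - lam * neural_similarity(cnn_feats, neural_resps)
```

The weight `lam` trades off task accuracy against brain-likeness; its name and default are illustrative.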
Topological data analysis (TDA) is an expanding field that leverages principles and tools from algebraic topology to quantify structural features of data sets or transform them into more manageable forms. As its theoretical foundations have been developed, TDA has shown promise in extracting useful information from high-dimensional, noisy, and complex data such as those used in biomedicine. To operate efficiently, these techniques may employ landmark samplers, either random or heuristic. The heuristic maxmin procedure obtains a roughly even distribution of sample points by implicitly constructing a cover comprising sets of uniform radius. However, issues arise with data that vary in density or include points with multiplicities, as are common in biomedicine. We propose an analogous procedure, "lastfirst" based on ranked distances, which implies a cover comprising sets of uniform cardinality. We first rigorously define the procedure and prove that it obtains landmarks with desired properties. We then perform benchmark tests and compare its performance to that of maxmin, on feature detection and class prediction tasks involving simulated and real-world biomedical data. Lastfirst is more general than maxmin in that it can be applied to any data on which arbitrary (and not necessarily symmetric) pairwise distances can be computed. Lastfirst is more computationally costly, but our implementation scales at the same rate as maxmin. We find that lastfirst achieves comparable performance on prediction tasks and outperforms maxmin on homology detection tasks. Where the numerical values of similarity measures are not meaningful, as in many biomedical contexts, lastfirst sampling may also improve interpretability.
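The maxmin procedure that the abstract above benchmarks against can be stated in a few lines: greedily add the point farthest from the landmarks chosen so far. A sketch in numpy (lastfirst replaces this distance criterion with one based on ranked distances, which is not reproduced here):

```python
import numpy as np

def maxmin_landmarks(X, n_landmarks, seed_index=0):
    """Greedy maxmin sampling: repeatedly add the point farthest from
    the landmarks chosen so far, yielding a roughly even cover of the
    data by balls of uniform radius."""
    landmarks = [seed_index]
    # Distance from every point to its nearest chosen landmark so far.
    d = np.linalg.norm(X - X[seed_index], axis=1)
    for _ in range(n_landmarks - 1):
        nxt = int(np.argmax(d))  # farthest point from the current landmark set
        landmarks.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return landmarks

points = np.array([[0.0], [1.0], [2.0], [10.0]])
print(maxmin_landmarks(points, 2))  # [0, 3]: the outlier is picked first
```

Note how the outlier at 10 is selected immediately; the abstract's point is that this density-insensitivity is exactly what causes trouble on biomedical data with varying density or repeated points.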
Scientists often use observational time-series data to study complex natural processes, from climate change to civil conflict to brain activity. But regression analyses of these data typically assume simplistic dynamics. Recent advances in deep learning have produced startling improvements in models of complex processes, from speech comprehension to nuclear physics to competitive games, yet deep learning is generally not used for scientific analysis. Here, we bridge this gap by showing that deep learning can be used not merely to imitate but to analyze complex processes, providing flexible function approximation while preserving interpretability. Our approach, the continuous-time deconvolutional regressive neural network (CDRNN), relaxes standard simplifying assumptions (e.g., linearity, stationarity, and homoscedasticity) that are implausible for many natural systems and can critically affect the interpretation of data. We evaluate CDRNNs on human language processing, a domain with complex continuous dynamics. We demonstrate dramatic improvements in predictive likelihood on both behavioral and neuroimaging data, and we show that CDRNNs can flexibly discover novel patterns in exploratory analyses, provide robust control of possible confounds in confirmatory analyses, and open up research questions that are otherwise difficult to study using observational data.
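Deconvolutional regression, which the CDRNN described above generalizes, predicts the response at time t as a sum of impulse responses to all preceding events; the CDRNN's contribution is to replace the parametric impulse-response kernel with a neural network. A toy sketch with a hypothetical exponential-decay kernel standing in for that network:

```python
import numpy as np

def impulse_response(delay, amplitude, rate):
    """Stand-in parametric kernel; in a CDRNN this function is a neural
    network taking the delay and the event's predictors as input.
    Future events (negative delay) contribute nothing."""
    return amplitude * np.exp(-rate * delay) * (delay >= 0)

def predict_response(t, event_times, event_amplitudes, rate=1.0):
    """Continuous-time deconvolutional prediction: the response at time
    t sums the decaying influence of every earlier event, with no
    assumption that events fall on a regular sampling grid."""
    delays = t - np.asarray(event_times)
    return float(np.sum(impulse_response(delays, np.asarray(event_amplitudes), rate)))
```

Because the kernel is a function of continuous delay, irregularly spaced observations (common in behavioral and neuroimaging data) pose no special problem.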
Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks, in which an adversary aims to infer whether an input sample was used to train the model. Over the past few years, researchers have produced many membership inference attacks and defenses. However, these attacks and defenses employ a variety of strategies and are evaluated on different models and datasets, and the lack of a comprehensive benchmark means that we do not understand the strengths and weaknesses of existing attacks and defenses. We fill this gap by conducting a large-scale measurement of different membership inference attacks and defenses. We systematize membership inference by studying nine attacks and six defenses and measuring their performance in a holistic evaluation. We then quantify the impact of the threat model on the results of these attacks. We find that some threat-model assumptions, such as identical architectures and identical distributions between the shadow and target models, are unnecessary. We are also the first to run attacks on real-world data collected from the Internet rather than on laboratory datasets. We further investigate what determines the performance of membership inference attacks and reveal that the commonly believed overfitting level is not sufficient for a successful attack. Instead, the Jensen-Shannon distance between the entropy/cross-entropy of member and non-member samples correlates much better with attack performance. This gives us a new way to accurately predict membership inference risk without running the attack. Finally, we find that data augmentation degrades the performance of existing attacks to a large extent, and we propose an adaptive attack that uses augmentation to train the shadow and attack models, improving attack performance.
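The risk signal proposed in the abstract above can be computed from histograms of per-sample cross-entropy for member and non-member samples. A minimal sketch (the binning choices here are illustrative, not the paper's):

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance: the square root of the JS divergence.
    With base-2 logs the result lies in [0, 1]."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))

def membership_risk(member_losses, nonmember_losses, bins=30):
    """Histogram the per-sample cross-entropy of members vs. non-members
    over a shared range; the more separable the two distributions, the
    higher the predicted membership inference risk."""
    lo = min(np.min(member_losses), np.min(nonmember_losses))
    hi = max(np.max(member_losses), np.max(nonmember_losses))
    p, _ = np.histogram(member_losses, bins=bins, range=(lo, hi))
    q, _ = np.histogram(nonmember_losses, bins=bins, range=(lo, hi))
    return js_distance(p, q)
```

Identical loss distributions give a distance near 0 (low risk); well-separated member and non-member losses push it toward 1, and no attack needs to be mounted to obtain the estimate.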
Existing studies of adversarial examples focus on perturbations digitally inserted into existing natural-image datasets. Such constructions of adversarial examples are unrealistic, since it may be difficult, or even impossible, for an attacker to deploy such attacks in the real world due to sensing and environmental effects. To better understand adversarial examples against cyber-physical systems, we propose approximating the real world through simulation. In this paper, we describe our synthetic dataset generation tool, which enables scalable collection of synthetic datasets with realistic adversarial examples. We use the CARLA simulator to collect such datasets and demonstrate simulated attacks that undergo the same environmental transformations and processing as real-world images. Our tool has been used to collect datasets to help evaluate the efficacy of adversarial examples and can be found at https://github.com/carla-simulator/carla/pull/4992.
Adversarial attacks on state-of-the-art machine learning models pose a significant threat to the safety and security of mission-critical autonomous systems. This paper considers an additional vulnerability of machine learning models that arises when an attacker can measure the power consumption of their underlying hardware platform. In particular, we explore the utility of power-consumption information for adversarial attacks on single-layer neural networks implemented on non-volatile memory crossbars. Our experimental results on the MNIST and CIFAR-10 datasets show that power consumption can reveal important information about the neural network's weight matrix, such as the 1-norms of its columns. This information can be used to infer the sensitivity of the network's loss with respect to different inputs. We also find that surrogate-based black-box attacks that exploit crossbar power information can improve attack efficiency.
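A toy model of the leak described above, assuming each crossbar cell dissipates power proportional to the magnitude of its weight-input product (real device physics is more complex; this sketch captures only the quantity the abstract says is revealed):

```python
import numpy as np

def crossbar_power(W, x):
    """Toy power model: each cell dissipates power proportional to
    |w_ij * x_j|, so the total reading is sum_ij |w_ij| * |x_j|."""
    return float(np.sum(np.abs(W * x)))

def recover_column_norms(W):
    """Probe with one-hot inputs e_j: the power reading for e_j equals
    the 1-norm of column j of the weight matrix, i.e. the information
    the side channel leaks."""
    n = W.shape[1]
    return [crossbar_power(W, np.eye(n)[j]) for j in range(n)]

W = np.array([[1.0, -2.0],
              [3.0,  4.0]])
print(recover_column_norms(W))  # [4.0, 6.0]
```

In an attack, the adversary only observes `crossbar_power`; the probing loop shows why those observations pin down the column norms that feed the sensitivity analysis.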
Benchmarking the tradeoff between neural network accuracy and training time is computationally expensive. Here we show how a multiplicative cyclic learning rate schedule can be used to construct a tradeoff curve in a single training run. We generate cyclic tradeoff curves for combinations of training methods such as Blurpool, Channels Last, Label Smoothing and MixUp, and highlight how these cyclic tradeoff curves can be used to evaluate the effects of algorithmic choices on network training efficiency.
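One plausible form of the multiplicative cyclic schedule described above: within each cycle the learning rate decays by a constant factor per step, then resets, so that evaluating at the end of each cycle yields one (training time, accuracy) point on the tradeoff curve. The parameter names and defaults are illustrative, not the paper's:

```python
def cyclic_lr(step, lr_max=1.0, decay=0.9, cycle_len=10):
    """Multiplicative cyclic learning-rate schedule: the rate decays
    geometrically within a cycle and resets to lr_max at each cycle
    boundary. Each cycle ends at a low learning rate, so the model is
    near a converged state and can be evaluated to give one point on
    the accuracy-vs-training-time curve, all within a single run."""
    return lr_max * decay ** (step % cycle_len)

schedule = [cyclic_lr(s) for s in range(25)]
```

A conventional benchmark would instead launch one full training run per training budget, which is what makes the tradeoff curve expensive to produce.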
Knowledge-based entity prediction (KEP) is a novel task that aims to improve machine perception in autonomous systems. KEP leverages relational knowledge from heterogeneous sources to predict potentially unrecognized entities. In this paper, we formally define KEP as a knowledge completion task. We then introduce three potential solutions that employ several machine learning and data mining techniques. Finally, we demonstrate the applicability of KEP on two autonomous systems from different domains, namely autonomous driving and smart manufacturing. We argue that in complex real-world systems, the use of KEP will significantly improve machine perception while pushing current technology one step closer to achieving full autonomy.